15 research outputs found

    Unifying Skill-Based Programming and Programming by Demonstration through Ontologies

    Smart manufacturing requires easily reconfigurable robotic systems that increase flexibility in the presence of market uncertainties by reducing set-up times for new tasks. One enabler of fast reconfigurability is intuitive robot programming. On the one hand, offline skill-based programming (OSP) allows the definition of new tasks by sequencing pre-defined, parameterizable building blocks, termed skills, in a graphical user interface. On the other hand, programming by demonstration (PbD) is a well-known technique that uses kinesthetic teaching for intuitive robot programming. This work presents an approach that automatically recognizes skills from a human demonstration and parameterizes them using the recorded data. The approach further unifies the two programming modes, OSP and PbD, with the help of an ontological knowledge base and empowers the end user to choose the preferred mode for each phase of the task. In the experiments, we evaluate two scenarios in which the user selects different sequences of programming modes to define a task. In each scenario, skills are recognized by a data-driven classifier and automatically parameterized from the recorded data. The fully defined tasks consist of both manually added and automatically recognized skills and are executed in a realistic industrial assembly environment.
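    A minimal sketch of the recognize-then-parameterize idea: the feature set, skill names, and the nearest-centroid classifier below are purely illustrative assumptions (the paper's actual data-driven classifier is not specified here):

```python
# Illustrative sketch, not the paper's implementation: classify a recorded
# demonstration segment into a skill and fill its parameters from the data.
import numpy as np

# Hypothetical feature centroids per skill: [path_length_m, mean_gripper_closed]
SKILL_PROTOTYPES = {
    "pick":  np.array([0.10, 0.7]),
    "place": np.array([0.10, 0.2]),
    "move":  np.array([0.50, 0.5]),
}

def extract_features(trajectory, gripper_states):
    """Reduce a demonstration segment to a small feature vector."""
    path_length = np.sum(np.linalg.norm(np.diff(trajectory, axis=0), axis=1))
    return np.array([path_length, float(np.mean(gripper_states))])

def recognize_skill(trajectory, gripper_states):
    """Data-driven classification: nearest skill prototype in feature space."""
    f = extract_features(trajectory, gripper_states)
    return min(SKILL_PROTOTYPES, key=lambda s: np.linalg.norm(SKILL_PROTOTYPES[s] - f))

def parameterize(skill, trajectory):
    """Fill the skill's parameters from the recording (here: the goal pose)."""
    return {"skill": skill, "goal": trajectory[-1].tolist()}

demo = np.array([[0.0, 0.0, 0.20], [0.0, 0.0, 0.10], [0.0, 0.0, 0.05]])
grip = np.array([0, 1, 1])  # gripper closes during the segment
print(parameterize(recognize_skill(demo, grip), demo))  # -> a "pick" at the goal pose
```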

    Explainability and Knowledge Representation in Robotics: The Green Button Challenge

    As robots get closer to human environments, a fundamental task for the community is to design system behaviors that foster trust. In this context, we have posed the "Green Button Challenge": every robot should have a green button that, when pressed, makes the robot explain what it is doing and why, in natural language. In this paper, we motivate why explainability is important in robotics, and why explicit knowledge representations are essential to achieving it. We highlight this with a concrete proof-of-concept implementation on our humanoid space assistant Rollin' Justin, which interprets its PDDL plans to explain what it is doing and why.
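    In the spirit of the challenge, here is a toy sketch of how a PDDL-style plan could be verbalized on a button press; the plan, the phrase templates, and the rule of justifying each step by its successor (or by the goal) are illustrative assumptions, not the Rollin' Justin implementation:

```python
# Illustrative "green button" handler: verbalize the current plan step and
# justify it by the next step, or by the goal if it is the last step.
PLAN = [("move_to", "table"), ("grasp", "cup"), ("move_to", "shelf"), ("place", "cup")]
GOAL = "the cup is on the shelf"

TEMPLATES = {  # hypothetical phrases per PDDL operator
    "move_to": "move to the {0}",
    "grasp":   "grasp the {0}",
    "place":   "place the {0}",
}

def green_button(step, plan=PLAN, goal=GOAL):
    """Answer 'what are you doing and why?' for the current plan step."""
    action, *args = plan[step]
    what = "I am going to " + TEMPLATES[action].format(*args)
    if step + 1 < len(plan):
        nxt, *nargs = plan[step + 1]
        why = "so that I can " + TEMPLATES[nxt].format(*nargs)
    else:
        why = "so that " + goal
    return f"{what}, {why}."

print(green_button(1))  # -> "I am going to grasp the cup, so that I can move to the shelf."
```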

    Robotic world models – conceptualization, review, and engineering best practices

    The term "world model" (WM) has surfaced several times in robotics, for instance, in the context of mobile manipulation, navigation and mapping, and deep reinforcement learning. Despite its frequent use, the term does not appear to have a concise definition that is used consistently across domains and research fields. In this review article, we bootstrap a terminology for WMs, describe important design dimensions found in robotic WMs, and use them to analyze the literature on WMs in robotics, which spans four decades. Throughout, we motivate the need for WMs using principles from software engineering, including "Design for use," "Do not repeat yourself," and "Low coupling, high cohesion." Concrete design guidelines are proposed for the future development and implementation of WMs. Finally, we highlight similarities and differences between the use of the term "world model" in robotic mobile manipulation and deep reinforcement learning.

    A Sociotechnical Assistance System for Learning-Conducive Work Design in Robot-Supported Assembly

    This contribution to the journal Gruppe. Interaktion. Organisation. (GIO) addresses the learning-conducive design of a robot-based assistance system for industrial assembly tasks. Individualized products, smaller batch sizes, and accelerated processes are aspects of the digital transformation of industrial manufacturing and part of the vision of flexible production. Human-robot collaboration and knowledge-based engineering are current approaches to meeting these demands. Using an application example (the wiring of control cabinets), this article presents a first approach to how knowledge-based technologies (above all ontologies and their logical interpretation) can automatically generate proposals for allocating work between humans and robots according to both economic criteria and criteria of humane work design. On the one hand, the requirements of each task can be matched against the individual skills and strengths of the employees, as well as those of the collaborative robot system, in a mixed-skill concept according to operational indicators (e.g. time, quality), in order to approach an optimal production flow. On the other hand, aspects of a humane design of human-machine interaction (HMI) can be taken into account, brought together primarily with a view to learning conduciveness. Learning conduciveness in HMI requires time, scope for action, and conducive content, and is at the same time increasingly necessary in order to respond proactively to the accelerating transformation of work. The use of technology in particular is associated with a strong shift of work activities toward decision-making and problem-solving, which, in addition to qualification and further training, also requires low-threshold, work-integrated learning opportunities. Collaborative robotics, as a key technology of flexible manufacturing, makes it necessary to develop new concepts for organizing the hybrid interplay of humans and robots. Building on the basic approach, the concept of a technical demonstrator is then presented, which was developed along an empirical case study. The prototypical technical implementation is based on a work environment with a robot arm and associated tools, formal semantic descriptions of the skills and activities of humans and robots, and an intuitive user interface, which supports, among other things, the individual adaptation of the generated work allocations.
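    A rough sketch of the mixed-skill matching idea, with invented capability scores, a worst-case-margin fit measure, and a simple learning bonus standing in for the article's ontology-based reasoning:

```python
# Illustrative sketch: allocate each task to the human or the cobot by matching
# task requirements against capability profiles, with a small bonus for human
# assignments that offer a learning opportunity (humane work design criterion).
TASKS = {                      # hypothetical requirement profiles (0..1)
    "route_wire":  {"dexterity": 0.9, "endurance": 0.3},
    "insert_clip": {"dexterity": 0.4, "endurance": 0.8},
}
AGENTS = {                     # hypothetical capability profiles (0..1)
    "worker": {"dexterity": 1.0, "endurance": 0.5},
    "cobot":  {"dexterity": 0.5, "endurance": 1.0},
}

def allocate(tasks=TASKS, agents=AGENTS, learning_bonus=0.1):
    plan = {}
    for task, req in tasks.items():
        def fit(agent):
            margin = min(agents[agent][c] - r for c, r in req.items())  # worst case
            if agent == "worker" and max(req.values()) > 0.7:
                margin += learning_bonus  # challenging task -> learning opportunity
            return margin
        plan[task] = max(agents, key=fit)
    return plan

print(allocate())  # -> {'route_wire': 'worker', 'insert_clip': 'cobot'}
```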

    Extending the Knowledge Driven Approach for Scalable Autonomy Teleoperation of a Robotic Avatar

    Crewed missions to celestial bodies such as the Moon and Mars are the focus of an increasing number of space agencies. Precautions to ensure a safe landing of the crew on the extraterrestrial surface, as well as reliable infrastructure at the remote location for bringing the crew back home, are key considerations for mission planning. The European Space Agency (ESA) identified in its Terrae Novae 2030+ roadmap that robots are needed as precursors and scouts to ensure the success of such missions. An important role these robots will play is supporting the astronaut crew in orbit in carrying out scientific work and, ultimately, ensuring nominal operation of the support infrastructure for astronauts on the surface. The METERON SUPVIS Justin ISS experiments demonstrated that supervised-autonomy robot command can be used to execute inspection, maintenance, and installation tasks with a robotic co-worker on a planetary surface. The knowledge-driven approach utilized in the experiments only reached its limits when situations arose that were not anticipated by the mission design. In deep-space scenarios, the astronauts must be able to overcome these limitations. An approach towards more direct command of a robot was demonstrated in the METERON ANALOG-1 ISS experiment, in which an astronaut used haptic telepresence to command a robotic avatar on the surface to execute sampling tasks. In this work, we propose a system that combines supervised autonomy and telepresence by extending the knowledge-driven approach. The knowledge management is based on organizing the prior knowledge of the robot in an object-centered context. Action Templates are used to define the knowledge on handling objects on a symbolic and geometric level. This robot-agnostic system can be used for supervisory command of any robotic co-worker. By integrating the robot itself as an object into the object-centered domain, robot-specific skills and (tele-)operation modes can be injected into the existing knowledge management system by formulating respective Action Templates. To use advanced teleoperation modes such as haptic telepresence efficiently, a variety of input devices are integrated into the proposed system in a way that is agnostic to both input devices and operation modes. The proposed system is evaluated in the Surface Avatar ISS experiment, where it is integrated into a Robot Command Terminal featuring a 3-degree-of-freedom joystick and a 7-degree-of-freedom haptic input device in the Columbus module of the ISS. In the preliminary experiment sessions of Surface Avatar, two astronauts in orbit took command of the humanoid service robot Rollin' Justin in Germany. This work presents and discusses the results of these ISS-to-ground sessions and derives requirements for extending the scalable autonomy system for use with a heterogeneous robotic team.
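    The idea of coupling a symbolic with a geometric level in an Action Template can be sketched as a small data structure; the field names, the example facts, and the teleoperation mode injected for the robot-as-object are illustrative assumptions, not the system's actual schema:

```python
# Illustrative sketch of an object-centered Action Template: symbolic
# preconditions/effects for the planner, plus an executable geometric part.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ActionTemplate:
    name: str
    preconditions: List[str]       # symbolic level: facts required to start
    effects: List[str]             # symbolic level: facts that hold on success
    execute: Callable[..., bool]   # geometric level: robot-specific skill

def teleop_drive(robot: str, device: str) -> bool:
    print(f"{robot}: velocity teleoperation from {device}")
    return True

# Treating the robot itself as an object lets (tele-)operation modes be
# injected into the same knowledge base as ordinary object handling.
drive = ActionTemplate(
    name="teleop_drive",
    preconditions=["(idle ?robot)", "(mapped ?device ?robot)"],
    effects=["(teleoperated ?robot)"],
    execute=teleop_drive,
)
drive.execute("rollin_justin", "3dof_joystick")
```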

    Audio Perception in Robotic Assistance for Human Space Exploration: A Feasibility Study

    Future crewed missions beyond low Earth orbit will greatly rely on the support of robotic assistance platforms to perform inspection and manipulation of critical assets, including crew habitats, landing sites, and assets for life support and operation. Maintenance and manipulation of a crewed site in extraterrestrial environments is a complex task, and the system will have to face different challenges during operation. While most may be solved autonomously, on certain occasions human intervention will be required. The telerobotic demonstration mission Surface Avatar, led by the German Aerospace Center (DLR) with partner European Space Agency (ESA), investigates different approaches offering astronauts on board the International Space Station (ISS) control of ground robots in representative scenarios, e.g. a Martian landing and exploration site. In this work, we present a feasibility study on how to integrate auditory information into this application. We discuss methods for obtaining audio information and localizing audio sources in the environment, as well as for fusing auditory and visual information to perform state estimation based on the gathered data. We demonstrate our work in different experiments to show the effectiveness of utilizing audio information, present the results of a spectral analysis of our mission assets, and discuss how this information could help future astronauts reason about the current mission situation.
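    As one possible realization of the spectral-analysis step, a sound recorded near an asset could be matched against known spectral signatures; the sample rate, the signatures, and the dominant-frequency criterion below are illustrative assumptions, not the study's method:

```python
# Illustrative sketch: estimate an asset's state from the dominant frequency
# of a recording, by comparing against hypothetical spectral signatures.
import numpy as np

FS = 16000                                                # sample rate [Hz]
SIGNATURES = {"pump_nominal": 120.0, "pump_worn": 180.0}  # dominant tone [Hz]

def dominant_frequency(signal, fs=FS):
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]             # skip the DC bin

def estimate_state(signal):
    f0 = dominant_frequency(signal)
    return min(SIGNATURES, key=lambda s: abs(SIGNATURES[s] - f0))

t = np.arange(0, 1.0, 1.0 / FS)
recording = np.sin(2 * np.pi * 121.0 * t) + 0.1 * np.random.randn(len(t))
print(estimate_state(recording))                          # -> "pump_nominal"
```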

    On Realizing Multi-Robot Command through Extending the Knowledge Driven Teleoperation Approach

    Future crewed planetary missions will strongly depend on the support of crew-assistance robots for the setup and inspection of critical assets, such as return vehicles, before and after crew arrival. To efficiently accomplish a high variety of tasks, we envision the use of a heterogeneous team of robots commanded at various levels of autonomy. This work presents an intuitive and versatile command concept for such robot teams using a multi-modal Robot Command Terminal (RCT) on board a crewed vessel. We employ an object-centered prior knowledge management that stores the information on how to deal with the objects around the robot, including knowledge on detecting, reasoning on, and interacting with them. The latter is organized in the form of Action Templates (ATs), which allow for hybrid planning of a task, i.e. reasoning on the symbolic and the geometric level to verify feasibility and find a suitable parameterization of the involved actions. Furthermore, by also treating the robots as objects, robot-specific skillsets can easily be integrated by embedding the skills in ATs. A Multi-Robot World State Representation (MRWSR) is used to instantiate actual objects and their properties. The decentralized synchronization of the MRWSR across multiple robots supports task execution when communication between all participants cannot be guaranteed. To account for robot-specific perception properties, information is stored independently for each robot and shared among all participants. This enables continuous robot- and command-specific decisions about which information to use to accomplish a task. A Mission Control instance allows the available command possibilities to be tuned to specific users, robots, or scenarios. The operator uses an RCT to command robots based on the object-based knowledge representation, with the MRWSR serving as a robot-agnostic interface to the planetary assets. The selection of a robot to be commanded serves as the top-level filter for the available commands; a second filter layer is applied by selecting an object instance. These filters reduce the multitude of available commands to a set that is meaningful and manageable for the operator, as sketched below. Robot-specific direct teleoperation skills are accessible via their respective ATs and can be mapped dynamically to the available input devices. Using AT-specific parameters provided by the robot for each input device allows robot-agnostic usage, as well as different control modes, e.g. velocity, model-mediated, or domain-based passivity control, based on the current communication characteristics. The concept will be evaluated on board the ISS within the Surface Avatar experiments.
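    The two-stage filtering (robot first, then object instance) can be illustrated in a few lines; the template table and all names in it are assumptions for illustration:

```python
# Illustrative sketch: narrow the available Action Templates by the selected
# robot and then by the selected object instance.
ACTION_TEMPLATES = [
    {"name": "grasp",        "robots": {"justin"},          "objects": {"sample_tube"}},
    {"name": "drive_to",     "robots": {"rover", "justin"}, "objects": {"lander", "sample_tube"}},
    {"name": "teleop_wheel", "robots": {"rover"},           "objects": {"rover"}},
]

def available_commands(robot, selected_object, templates=ACTION_TEMPLATES):
    """Reduce the multitude of commands to what the selected robot can
    meaningfully do with the selected object instance."""
    return [t["name"] for t in templates
            if robot in t["robots"] and selected_object in t["objects"]]

print(available_commands("justin", "sample_tube"))  # -> ['grasp', 'drive_to']
```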

    Introduction to Surface Avatar: the First Heterogeneous Robotic Team to be Commanded with Scalable Autonomy from the ISS

    Robotics is vital to the continued development toward Lunar and Martian exploration, in-situ resource utilization, and surface infrastructure construction. Large-scale extraterrestrial missions will require teams of robots with different, complementary capabilities, together with a powerful, intuitive user interface for effective commanding. We introduce Surface Avatar, the newest ISS-to-Earth telerobotic experiment series, to be conducted in 2022-2024. Spearheaded by DLR together with ESA, Surface Avatar builds on expertise in commanding robots with different levels of autonomy from our past telerobotic experiments: Kontur-2, Haptics, Interact, SUPVIS Justin, and Analog-1. A team of four heterogeneous robots in a multi-site analog environment at DLR is at the command of a crew member on the ISS. The team has a humanoid robot for dexterous object handling, construction, and maintenance; a rover for long traverses and sample acquisition; a quadrupedal robot for scouting and exploring difficult terrains; and a lander with a robotic arm for component delivery and sample stowage. The crew's command terminal is multimodal, with an intuitive graphical user interface, a 3-DOF joystick, and a 7-DOF input device with force feedback. The autonomy of any robot can be scaled up and down depending on the task and the astronaut's preference: acting as an avatar of the crew in haptically coupled telepresence, or receiving task-level commands like an intelligent co-worker. Through the crew performing collaborative tasks in exploration and construction scenarios, we hope to gain insight into how to optimally command robots in a future space mission. This paper presents findings from the first preliminary session in June 2022 and discusses the way forward for the planned experiment sessions.

    Learning Semantic State Representations in Continuous Domains

    Universally deployable service robots must be able to perform a wide variety of tasks. Possible applications range from supporting nursing staff in elderly care to maintaining solar panels on Mars. Compared to industrial robots, which mainly operate in structured, supervisable environments, the environments of service robots are subject to continuous change, for example caused by humans interacting with them. Because of the complexity of these environments, service robots are operated under the paradigm of supervised autonomy: human operators define abstract tasks, and the robots then autonomously plan action sequences that achieve the goal. To plan such complex courses of action, symbolic planning is used in addition to geometric planning. Based on actions that contain both a symbolic and a geometric description, robots can infer which symbolic preconditions must hold for the actions and how the actions change the symbolic world. This knowledge is used to plan action sequences that fulfill the desired goal. Because the environments change constantly, the world can change during the execution of an action sequence. However, robots are not yet able to detect such changes while executing action sequences: currently, service robots assume fault-free execution once they have successfully planned an action sequence, regardless of whether anything goes wrong during execution. To remedy this problem, robots must be able to detect deviations from the required symbolic state, and to detect these deviations in the real world, they need a representation of symbols in terms of sensor information. This thesis addresses the challenges described above. To link symbols with sensor information, we first examine the meaning of the term symbol and consider the grounding of symbols in sensor information from different perspectives. We use machine learning to verify different symbols from sensor information; in particular, we consider outlier-detection algorithms, since they cope well with little training data. This approach can be described as top-down verification of symbols. However, it has disadvantages with respect to the interpretability of symbols. To compensate for these disadvantages, we investigate how symbol emergence can be employed, and show how latent symbols can be learned without supervision using multi-modal Latent Dirichlet Allocation. Based on the top-down verification approach, we implement a method for monitoring action sequences and integrate it into the robot platform Rollin' Justin, a humanoid robot developed at the German Aerospace Center (DLR). In this process, the trained machine-learning models are integrated via small Python code snippets that contain code for preprocessing sensor data and for verifying symbols with the trained models. This verification process paves the way for robots to react to changes in the environment during the execution of action sequences.
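    The top-down verification of a single symbol could look like the following sketch, using scikit-learn's one-class SVM as an outlier detector over a handful of positive sensor samples; the "(grasped cup)" symbol, the features, and all numbers are illustrative assumptions, not the thesis code:

```python
# Illustrative sketch: per-symbol outlier detection decides whether the
# symbol (here "(grasped cup)") currently holds, given few training samples.
import numpy as np
from sklearn.svm import OneClassSVM

# A few sensor readings recorded while the symbol held:
# features = [gripper_opening_m, measured_load_kg]
positives = np.array([[0.052, 0.31], [0.050, 0.29], [0.055, 0.33], [0.051, 0.30]])

model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1).fit(positives)

def symbol_holds(sensor_features):
    """True if the current reading is an inlier for the trained symbol model."""
    return model.predict(np.atleast_2d(sensor_features))[0] == 1

print(symbol_holds([0.053, 0.30]))  # reading close to training data -> inlier
print(symbol_holds([0.002, 0.00]))  # empty gripper -> outlier, symbol violated
```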

    Unsupervised symbol emergence for supervised autonomy using multi-modal latent Dirichlet allocations

    In future Mars exploration scenarios, astronauts orbiting the planet will control robots on the surface with supervised autonomy to construct the infrastructure necessary for human habitation. Symbol-based planning enables intuitive supervised teleoperation by presenting relevant action possibilities to the astronaut. While our initial analog experiments aboard the International Space Station (ISS) proved this scenario to be very effective, the complexity of the problem puts high demands on domain models. The symbols used in symbolic planning are error-prone, as they are often hand-crafted and lack a mapping to actual sensor information. While this may lead to biased action definitions, the lack of feedback is even more critical. To overcome these issues, this paper explores the possibility of learning the mapping between multi-modal sensor information and the high-level preconditions and effects of robot actions. To achieve this, we propose to utilize a multi-modal Latent Dirichlet Allocation (MLDA) for unsupervised symbol emergence. The learned representation is used to identify domain-specific design flaws and to assist in supervised-autonomy robot operation by predicting action feasibility and assessing the execution outcome. The approach is evaluated in a realistic telerobotics experiment conducted with the humanoid robot Rollin' Justin.
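    A rough approximation of the symbol-emergence step, substituting a standard LDA over modality-prefixed "words" for the paper's multi-modal LDA; the scenes and vocabulary are invented for illustration:

```python
# Illustrative sketch: pool discretized observations from several modalities
# into one bag-of-words per scene; LDA topics then act as latent "symbols".
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

scenes = [
    "vision:red vision:cylinder haptic:heavy audio:silent",   # e.g. full bottle
    "vision:red vision:cylinder haptic:light audio:rattle",   # e.g. empty bottle
    "vision:red vision:cylinder haptic:heavy audio:silent",
    "vision:red vision:cylinder haptic:light audio:rattle",
]

vectorizer = CountVectorizer(token_pattern=r"\S+")  # keep "modality:value" tokens
X = vectorizer.fit_transform(scenes)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print(lda.transform(X).round(2))  # per-scene mixture over the latent symbols
```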